Unlearning how to code (and think)
I struggled a bit with this text, as I was undecided whether I wanted to focus on thinking or on coding. I still don’t think it’s perfect, but I decided to publish it anyway; this is a blog, not a newspaper, after all.
One of my concerns with AI is that it lures us into thinking less, or at least into outsourcing our more demanding thought processes, like analytical thinking. We still don’t fully understand how our nervous system works, but we do know with certainty that we have to challenge our brains if we want to stay sharp. Learning a language, picking up a new musical instrument, or simply taking up a new hobby usually improves the overall functioning of our brains and reduces the chance of forgetfulness, or possibly even dementia. The saying that the brain is like a muscle (use it or lose it) appears to be correct. We also know that development during childhood and adolescence, up to about age 21, has the greatest impact on our brain function and long-term health.
Practicing mentally demanding activities, like analytical thinking, less and less feels like a dangerous experiment to run on ourselves and our teenagers. I believe this to be true in general, but let’s get to the point: programming.
I believe software engineers/programmers/coders will, through laziness and the pressure to be faster than “the competition”, increasingly rely on AI to do (some of) the thinking for them. I know this is not a new fear, and I am not alone in it. I am also guilty of using AI to save time and think for me: just a few days ago I published an analysis done by AI instead of thinking it through myself.
I believe that the ability to code and to understand code will soon be even rarer than it is today. I expect it will become socially acceptable to claim that one can code when all one can really do is ask an AI to write code. Some of those calling themselves “developers” or “engineers” will be unable to understand the code they publish, because they used AI tools to generate it. Many of those shipping AI-generated code will not know exactly what it does, or how to fix or change it.
I am sure that, under pressure to deliver digital products faster than the competition, unchecked AI-generated code will creep into products, even those from companies with strict AI policies. As long as this code does what it’s supposed to and passes all tests (if there are any), engineers will be rewarded for their high performance: a performance that comes from copying and pasting AI output into products without checking it.
The consequences of this are many and diverse. It opens up new attack vectors and creates problems when maintaining the code or fixing bugs. It also means that engineers who use AI instead of honing their craft are more likely to survive layoffs, since their apparent productivity is higher than that of engineers who write their code by hand. Overall, I see a future in which a brain drain in the software engineering field leads to software we rely on without understanding how it works or how to mend it when it breaks.